Reinforcement Learning in Generating Fuzzy Systems

Authors

  • Yi Zhou
  • Meng Joo Er
Abstract

Fuzzy-logic-based modelling and control is very efficient in dealing with imprecision and nonlinearity [1]. However, conventional approaches to designing Fuzzy Inference Systems (FISs) are subjective and require significant human effort. Besides being time-consuming, subjective approaches may fail if the system is too complex or uncertain. Therefore, many researchers have sought automatic methods for generating FISs [2]. The main issues in designing an FIS are structure identification and parameter estimation. Structure identification is concerned with how to partition the input space and determine the number of fuzzy rules according to the task requirements, while parameter estimation involves determining the parameters of both the premises and the consequents of the fuzzy rules [3].

Structure identification and input classification can be accomplished by Supervised Learning (SL), Unsupervised Learning (UL) and Reinforcement Learning (RL). SL is a learning approach that adopts a supervisor, through which the training system adjusts its structure and parameters according to a given training data set. In [4], the author provided a paradigm for acquiring the parameters of fuzzy rules. Beyond adjusting parameters, self-identified structures have been achieved by SL approaches termed Dynamic Fuzzy Neural Networks in [3] and Generalized Dynamic Fuzzy Neural Networks in [5]. However, training data are not always available, especially when little is known about the system or the system is uncertain. In such situations, UL and RL are preferred over SL, as they are learning processes that need no supervisor to tell the learner what action to take. Through RL, state-action pairs that achieve positive reward are encouraged in future selections, while those that produce negative reward are discouraged. A number of researchers have applied RL to train the consequent parts of an FIS [6]-[8].
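The RL mechanism described above, reinforcing state-action pairs that earn positive reward and suppressing those that earn negative reward, can be illustrated with a minimal tabular Q-learning sketch. This is a generic illustration of the principle, not the authors' method; the hyperparameters `ALPHA`, `GAMMA` and `EPS` are assumed values.

```python
# Illustrative tabular Q-learning sketch (not the paper's algorithm).
# Positive rewards raise Q(s, a), so that pair is favoured by future
# greedy selections; negative rewards lower it and discourage the pair.
from collections import defaultdict
import random

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration rate
Q = defaultdict(float)             # Q-values, default 0 for unseen pairs

def select_action(state, actions):
    if random.random() < EPS:                         # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state, actions):
    # Standard one-step Q-learning update driven by the TD error
    best_next = max(Q[(next_state, a)] for a in actions)
    td_error = reward + GAMMA * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error
```

A single positive-reward update already nudges the pair's value upward, which is exactly the "encouragement" the abstract refers to.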
The premise (precondition) parts of the FIS are either predefined, as in [6], determined via the ε-completeness and squared TD error criteria, as in [7], or obtained through "aligned clustering", as in [8]. Both the DFQL and CQGAF methods achieve online structure identification by creating fuzzy rules when the input space is not well partitioned. However, neither method can adjust the premise parameters except when creating new rules: the centre positions and widths of the fuzzy neurons are allocated by considering only the input clustering. Moreover, neither method can delete fuzzy rules once they are generated, even when the rules become redundant.
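The ε-completeness criterion mentioned above creates a new fuzzy rule whenever no existing rule fires strongly enough on the current input, i.e. the input space is not yet well partitioned there. The following is a rough sketch of that idea under assumed Gaussian membership functions; the threshold `EPSILON`, the fixed `WIDTH`, and the function names are illustrative assumptions, not the DFQL or CQGAF implementation.

```python
# Sketch of epsilon-completeness-driven rule creation (illustrative only).
import numpy as np

EPSILON = 0.5   # assumed minimum acceptable firing strength
WIDTH = 1.0     # assumed Gaussian width for newly created rules

centres = []    # one centre vector per existing fuzzy rule

def firing_strengths(x):
    # Gaussian firing strength of each existing rule for input x
    return [np.exp(-np.sum((x - c) ** 2) / (2 * WIDTH ** 2)) for c in centres]

def maybe_create_rule(x):
    # Create a new rule when no existing rule covers x well enough
    if not centres or max(firing_strengths(x)) < EPSILON:
        centres.append(np.array(x, dtype=float))
        return True
    return False
```

Note that, as the abstract points out for DFQL and CQGAF, a scheme like this only ever adds rules at fixed centres and widths; it neither tunes existing premise parameters nor removes rules that later become redundant.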


Similar Articles

Exploration and exploitation balance management in fuzzy reinforcement learning

This paper offers a fuzzy balance management scheme between exploration and exploitation, which can be implemented in any critic-only fuzzy reinforcement learning method. The paper, however, focuses on a newly developed continuous reinforcement learning method called fuzzy Sarsa learning (FSL), due to its advantages. Establishing balance greatly depends on the accuracy of the action value function ...


A Self-Generating Neuro-Fuzzy System Through Reinforcements

In this paper, a novel self-generating neuro-fuzzy system through reinforcements is proposed. Not only the weights of the network but also the architecture of the whole network are all learned through reinforcement learning. The proposed neuro-fuzzy system is applied to the inverted pendulum system to demonstrate its performance. Key-words: reinforcement learning, neural network, neuro-fuzzy sy...


Pii: S0165-0114(02)00299-3

In this paper, a new reinforcement learning scheme is developed for a class of serial-link robot arms. Traditional reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment. In the proposed reinforcement learning scheme, an agent is employed to collect signals from a fixed gain controller, an adaptive critic e...


A new approach to fuzzy classifier systems and its application in self-generating neuro-fuzzy systems

A classifier system is a machine learning system that learns syntactically simple string rules (called classifiers) through a genetic algorithm to guide its performance in an arbitrary environment. In a classifier system, the bucket brigade algorithm is used to solve the problem of credit assignment, which is a critical problem in the field of reinforcement learning. In this paper, we propose a...


Hierarchical Functional Concepts for Knowledge Transfer among Reinforcement Learning Agents

This article introduces the notions of functional space and concept as a way of knowledge representation and abstraction for Reinforcement Learning agents. These definitions are used as a tool of knowledge transfer among agents. The agents are assumed to be heterogeneous; they have different state spaces but share the same dynamics, reward and action space. In other words, the agents are assumed t...


A Reinforcement Learning Algorithm with Evolving Fuzzy Neural Networks

The synergy of the two paradigms, neural networks and fuzzy inference systems, has given rise to a rapidly emerging field, neuro-fuzzy systems. Evolving neuro-fuzzy systems are intended to use online learning to extract knowledge from data and perform high-level adaptation of the network structure. We explore the potential of evolving neuro-fuzzy systems in reinforcement learning (RL) application...



Journal title:

Volume   Issue

Pages  -

Publication date: 2012